
    Perceptually Uniform Construction of Illustrative Textures

    Illustrative textures, such as stippling or hatching, have predominantly been used as an alternative to conventional Phong rendering. Recently, their potential for encoding information on surfaces or maps through varying densities has also been recognized. This has the significant advantage that color remains available as an additional visual channel, over which the illustrative textures can be overlaid. Effectively, it is thus possible to display multiple pieces of information, such as two different scalar fields, on a surface simultaneously. In previous work, these textures were generated manually and the choice of density was not empirically grounded. Here, we first want to determine and understand the perceptual space of illustrative textures. We chose a succession of simplices with increasing dimension as primitives for our textures: dots, lines, and triangles. Thus, we explore the texture types of stippling, hatching, and triangles. We create a range of textures by sampling the density space uniformly. Then, we conduct three perceptual studies in which participants performed pairwise comparisons for each texture type. We use multidimensional scaling (MDS) to analyze the perceptual space of each category. The perception of stippling and triangles appears relatively similar; both are adequately described by a 1D manifold in 2D space. The perceptual space of hatching consists of two main clusters: crosshatched textures, and textures with only one hatching direction. However, the perception of hatching textures with only one hatching direction is similar to that of stippling and triangles. Based on our findings, we construct perceptually uniform illustrative textures. Afterwards, we provide concrete application examples for the constructed textures.
    Comment: 11 pages, 15 figures, to be published in IEEE Transactions on Visualization and Computer Graphics
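
    The abstract above mentions aggregating pairwise comparisons and analyzing them with multidimensional scaling (MDS). Purely as a hedged illustration (not the authors' code), the Python sketch below embeds a small, made-up texture dissimilarity matrix into 2D with scikit-learn's metric MDS; the texture labels and dissimilarity values are hypothetical.

        # Minimal sketch: metric MDS on a precomputed dissimilarity matrix,
        # as one might use to inspect a perceptual space of textures.
        # All values below are illustrative, not data from the study.
        import numpy as np
        from sklearn.manifold import MDS

        texture_labels = ["stipple_sparse", "stipple_low", "stipple_mid", "stipple_dense"]

        # Symmetric dissimilarities aggregated from (hypothetical) pairwise comparisons.
        dissimilarity = np.array([
            [0.0, 0.3, 0.7, 0.9],
            [0.3, 0.0, 0.4, 0.7],
            [0.7, 0.4, 0.0, 0.3],
            [0.9, 0.7, 0.3, 0.0],
        ])

        mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
        coords = mds.fit_transform(dissimilarity)  # one 2D point per texture

        for label, (x, y) in zip(texture_labels, coords):
            print(f"{label}: ({x:.2f}, {y:.2f})")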

    VEHICLE: Validation and exploration of the hierarchical integration of conflict event data

    The exploration of large-scale conflicts, as well as their causes and effects, is an important aspect of socio-political analysis. Since event data related to major conflicts are usually obtained from different sources, researchers have developed a semi-automatic matching algorithm that integrates event data of different origins into one comprehensive dataset using hierarchical taxonomies. The validity of the resulting integration is not easy to assess, since the results depend on user-defined input parameters and on the relationships between the original data sources. However, only rudimentary visualization techniques have been used so far to analyze the results, allowing neither trustworthy validation nor exploration of how the final dataset is composed. To overcome this problem, we developed VEHICLE, a web-based tool to validate and explore the results of the hierarchical integration. For the design, we collaborated with a domain expert to identify the underlying domain problems and to derive a task and workflow description. The tool combines traditional and novel visual analysis techniques, employing statistical and map-based depictions as well as advanced interaction techniques. We demonstrated the usefulness of VEHICLE in two case studies and in an evaluation with conflict researchers, confirming domain hypotheses and generating new insights.
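
    The hierarchical integration described above relies on matching event records from different sources against a shared event-type taxonomy. Purely as a hypothetical illustration of that idea (not the actual matching algorithm the tool validates), the Python sketch below treats two records as match candidates when their types are compatible in a toy taxonomy and their dates lie within a tolerance; the taxonomy, records, and tolerance are all made up.

        from datetime import date

        # Toy child -> parent taxonomy of event types (illustrative only).
        PARENT = {
            "riot": "violent_event",
            "armed_clash": "violent_event",
            "violent_event": "conflict_event",
            "protest": "conflict_event",
        }

        def ancestors(event_type):
            """Return the event type together with all of its taxonomy ancestors."""
            chain = {event_type}
            while event_type in PARENT:
                event_type = PARENT[event_type]
                chain.add(event_type)
            return chain

        def is_match(a, b, day_tolerance=2):
            """Match if one type subsumes the other and the dates are close."""
            compatible = a["type"] in ancestors(b["type"]) or b["type"] in ancestors(a["type"])
            close = abs((a["date"] - b["date"]).days) <= day_tolerance
            return compatible and close

        event_a = {"type": "riot", "date": date(2020, 5, 3)}
        event_b = {"type": "violent_event", "date": date(2020, 5, 4)}
        print(is_match(event_a, event_b))  # True: compatible types, one day apart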

    Challenges in Designing Natural Language Interfaces for Complex Visual Models

    Voigt H, Meuschke M, Lawonn K, Zarrieß S. Challenges in Designing Natural Language Interfaces for Complex Visual Models. In: Proceedings of the First Workshop on Bridging Human-Computer Interaction and Natural Language Processing. Online: Association for Computational Linguistics; 2021: 66-73.
    Intuitive interaction with visual models is becoming an increasingly important task in the field of Visualization (VIS), and verbal interaction represents a significant aspect of it. Vice versa, modeling verbal interaction in visual environments is a major trend in ongoing NLP research. To date, however, research on Language & Vision mostly happens at the intersection of NLP and Computer Vision (CV), and much less at the intersection of NLP and Visualization, which is an important area of Human-Computer Interaction (HCI). This paper presents a brief survey of recent work on interactive tasks and set-ups in NLP and Visualization. We discuss the respective methods, show interesting gaps, and conclude by suggesting neural, visually grounded dialogue modeling as a promising direction for NLIs for visual models.

    The Why and The How: A Survey on Natural Language Interaction in Visualization

    Voigt H, Alaçam Ö, Meuschke M, Lawonn K, Zarrieß S. The Why and The How: A Survey on Natural Language Interaction in Visualization. In: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg, PA, USA: Association for Computational Linguistics; 2022: 348-374.
    Natural language as a modality of interaction is becoming increasingly popular in the field of visualization. In addition to the popular query interfaces, other language-based interactions such as annotations, recommendations, explanations, or documentation are attracting growing interest. In this survey, we provide an overview of natural language-based interaction in the research area of visualization. We discuss a renowned taxonomy of visualization tasks and classify 119 related works to illustrate the state of the art in how current natural language interfaces support their performance. We examine applied NLP methods and discuss human-machine dialogue structures with a focus on initiative, duration, and communicative functions in recent visualization-oriented dialogue interfaces. Based on this overview, we point out interesting areas for the future application of NLP methods in the field of visualization.

    Paparazzi: A Deep Dive into the Capabilities of Language and Vision Models for Grounding Viewpoint Descriptions

    Voigt H, Hombeck J, Meuschke M, Lawonn K, Zarrieß S. Paparazzi: A Deep Dive into the Capabilities of Language and Vision Models for Grounding Viewpoint Descriptions. In: Findings of the Association for Computational Linguistics: EACL 2023. Dubrovnik, Croatia: Association for Computational Linguistics; 2023: 828-843.
    Existing language and vision models achieve impressive performance in image-text understanding. Yet, it is an open question to what extent they can be used for language understanding in 3D environments and whether they implicitly acquire 3D object knowledge, e.g. about different views of an object. In this paper, we investigate whether a state-of-the-art language and vision model, CLIP, is able to ground perspective descriptions of a 3D object and identify canonical views of common objects based on text queries. We present an evaluation framework that uses a circling camera around a 3D object to generate images from different viewpoints and evaluates them in terms of their similarity to natural language descriptions. We find that a pre-trained CLIP model performs poorly on most canonical views and that fine-tuning using hard negative sampling and random contrasting yields good results even under conditions with little available training data.
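
    To give a rough idea of the evaluation setup described above, the following Python sketch scores renderings of an object from different camera angles against a viewpoint description with a pre-trained CLIP model via Hugging Face transformers. The model checkpoint, image files, and query text are assumptions for illustration; this is not the authors' pipeline.

        import torch
        from PIL import Image
        from transformers import CLIPModel, CLIPProcessor

        model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
        processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

        # Hypothetical renderings of one 3D object from a circling camera.
        image_paths = ["view_000.png", "view_090.png", "view_180.png", "view_270.png"]
        images = [Image.open(p) for p in image_paths]
        query = "a chair seen from the front"

        inputs = processor(text=[query], images=images, return_tensors="pt", padding=True)
        with torch.no_grad():
            outputs = model(**inputs)

        # One image-text similarity score per rendered viewpoint.
        scores = outputs.logits_per_image.squeeze(-1)
        best = int(scores.argmax())
        print(f"best-matching viewpoint: {image_paths[best]} (score {scores[best]:.2f})")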

    KeywordScape: Visual Document Exploration using Contextualized Keyword Embeddings

    Voigt H, Meuschke M, Zarrieß S, Lawonn K. KeywordScape: Visual Document Exploration using Contextualized Keyword Embeddings. In: Che W, Shutova E, eds. Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. Stroudsburg, PA: Association for Computational Linguistics; 2022: 137-147.
    Although contextualized word embeddings have led to great improvements in automatic language understanding, their potential for practical applications in document exploration and visualization has been little explored. Common visualization techniques used for, e.g., model analysis usually provide simple scatter plots of token-level embeddings that give no insight into their contextual use. In this work, we propose KeywordScape, a visual exploration tool that allows users to overview, summarize, and explore the semantic content of documents based on their keywords. While existing keyword-based exploration tools assume that keywords have static meanings, our tool represents keywords in terms of their contextualized embeddings. Our application visualizes these embeddings in a semantic landscape that represents keywords as islands on a spherical map. This keeps keywords with similar context close to each other, allowing for a more precise search and comparison of documents.
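
    As a small illustration of what a contextualized keyword embedding is (a sketch under assumptions, not the KeywordScape implementation), the Python snippet below extracts BERT representations for the same keyword in two sentences and compares them; the model name and example sentences are illustrative choices.

        import torch
        from transformers import AutoModel, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
        model = AutoModel.from_pretrained("bert-base-uncased")

        def keyword_embedding(sentence, keyword):
            """Mean-pool the last-hidden-state vectors of the keyword's subword tokens."""
            enc = tokenizer(sentence, return_tensors="pt")
            with torch.no_grad():
                hidden = model(**enc).last_hidden_state[0]  # (seq_len, 768)
            keyword_ids = tokenizer(keyword, add_special_tokens=False)["input_ids"]
            ids = enc["input_ids"][0].tolist()
            # Locate the keyword's subword span in the encoded sentence (first occurrence).
            for i in range(len(ids) - len(keyword_ids) + 1):
                if ids[i:i + len(keyword_ids)] == keyword_ids:
                    return hidden[i:i + len(keyword_ids)].mean(dim=0)
            raise ValueError(f"'{keyword}' not found in sentence")

        a = keyword_embedding("The bank approved the loan.", "bank")
        b = keyword_embedding("We had a picnic on the river bank.", "bank")
        # Same keyword, different contexts, hence different (less similar) embeddings.
        print("cosine similarity:", torch.cosine_similarity(a, b, dim=0).item())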

    Visual Analysis of Aneurysm Data using Statistical Graphics


    Visual Assistance in Development and Validation of Bayesian Networks for Clinical Decision Support

    The development and validation of Clinical Decision Support Models (CDSM) based on Bayesian networks (BN) is commonly a collaborative effort between medical researchers, who provide the domain expertise, and computer scientists, who develop the decision support model. Although modern tools provide facilities for data-driven model generation, domain experts are required to validate the accuracy of the learned model and to provide expert knowledge for fine-tuning it, while computer scientists are needed to integrate this knowledge into the learned model (hybrid modeling approach). This generally time-expensive procedure hampers CDSM generation and updating. To address this problem, we developed a novel interactive visual approach that allows medical researchers with little expertise in CDSM development to build and validate BNs from domain-specific data largely independently, thus diminishing the need for an additional computer scientist. In this context, we abstracted and simplified the common workflow in BN development and adapted it to medical experts' needs. We demonstrate our visual approach on data of endometrial cancer patients and evaluate it with six medical researchers who are domain experts in the gynecological field.
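
    As a rough sketch of the hybrid modeling workflow described above (data-driven structure learning followed by expert adjustment), the following Python snippet uses the pgmpy library on a made-up toy dataset; the variables, data values, and the expert-enforced edge are hypothetical, and this is not the clinical model from the paper.

        import pandas as pd
        from pgmpy.estimators import BayesianEstimator, BicScore, HillClimbSearch
        from pgmpy.models import BayesianNetwork

        # Hypothetical, discretized patient records (illustrative only).
        data = pd.DataFrame({
            "Age":       ["old", "young", "old", "old", "young", "old"],
            "Biomarker": ["high", "low", "high", "low", "low", "high"],
            "Stage":     ["II", "I", "III", "II", "I", "III"],
            "Outcome":   ["poor", "good", "poor", "good", "good", "poor"],
        })

        # 1) Data-driven structure learning.
        learned = HillClimbSearch(data).estimate(scoring_method=BicScore(data))

        # 2) Expert fine-tuning: enforce an edge the domain expert insists on,
        #    dropping a conflicting reversed edge if the search produced one.
        edges = {e for e in learned.edges() if e != ("Outcome", "Biomarker")}
        edges.add(("Biomarker", "Outcome"))
        model = BayesianNetwork(edges)
        model.add_nodes_from(data.columns)

        # 3) Re-estimate the conditional probability tables on the adjusted structure.
        model.fit(data, estimator=BayesianEstimator, prior_type="BDeu")
        print(model.edges())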